    Multi-Camera Platform for Panoramic Real-Time HDR Video Construction and Rendering

    High dynamic range (HDR) images are usually obtained by capturing several images of the scene at different exposures. Previous HDR video techniques adopted the same principle by stacking HDR frames in the time domain. We designed a new multi-camera platform that constructs and renders panoramic HDR video in real time, at 1024 × 256 resolution and a frame rate of 25 fps. We exploit the overlapping fields of view between cameras set to different exposures to create an HDR radiance map. We propose a method for HDR frame reconstruction that merges previous HDR imaging techniques with algorithms for panorama reconstruction. The developed FPGA-based processing system reconstructs the HDR frame using the proposed method and tone maps the resulting image with a hardware-adapted global operator. The measured throughput of the system is 245 MB/s, which, to the best of our knowledge, makes it one of the fastest HDR video processing systems.
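
    For readers who want the general shape of this pipeline, the sketch below models the two stages the abstract names: merging differently exposed views into a radiance map with mid-range-weighted averaging, then compressing the result with a Reinhard-style global tone-mapping operator. The linear camera response, hat weighting, tone curve, and synthetic data are illustrative assumptions, not the paper's hardware-adapted design.

```python
# Minimal software sketch of a multi-exposure HDR merge followed by a global
# tone-mapping operator. Assumes a linear camera response; the weighting and
# tone curve are illustrative, not the paper's hardware-adapted operator.
import numpy as np

def merge_radiance(images, exposure_times):
    """Merge aligned 8-bit exposures into a relative radiance map."""
    acc = np.zeros(images[0].shape, dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)   # hat weight: trust mid-range pixels
        acc += w * z / t                  # divide by exposure time -> radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-6)

def tone_map_global(radiance, key=0.18):
    """Reinhard-style global operator mapping radiance into [0, 1)."""
    log_mean = np.exp(np.mean(np.log(radiance + 1e-6)))
    scaled = key * radiance / log_mean    # scale the scene to the chosen key
    return scaled / (1.0 + scaled)        # smooth compression of highlights

# Example: three synthetic exposures of the same scene region.
rng = np.random.default_rng(0)
scene = rng.uniform(0.01, 4.0, size=(256, 1024))  # ground-truth radiance
times = [1 / 60, 1 / 15, 1 / 4]
shots = [np.clip(scene * t * 1000, 0, 255).astype(np.uint8) for t in times]
ldr = tone_map_global(merge_radiance(shots, times))
```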

    Image Blending in a High Frame Rate FPGA-based Multi-Camera System

    Panoptic is a custom spherical light field camera used as a polydioptric system in which imagers are distributed over a hemispherical surface, each having its own view of the surroundings and a distinct focal plane. The spherical light field camera records light information arriving from any direction around its center. This paper revisits the previously developed Nearest Neighbor and Linear blending techniques, and presents novel Gaussian blending and Restricted Gaussian blending techniques for reconstructing the vision of a virtual observer located inside the spherical geometry. These new blending techniques improve the quality of the reconstructed image with respect to ordinary stitching techniques and simpler image blending algorithms; a comparison of the developed blending algorithms is also given. A hardware architecture based on Field Programmable Gate Arrays (FPGAs), enabling real-time implementation of the blending algorithms, is presented along with imaging results and a comparison of resource utilization. A recorded omnidirectional video is attached as supplementary material.
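
    As a rough illustration of the Gaussian blending idea, the sketch below weights each camera's contribution to a virtual-view pixel by a Gaussian of the angle between the pixel's viewing direction and that camera's optical axis; a Restricted variant would additionally zero out cameras beyond an angular threshold. The camera axes, sigma, and sample values here are hypothetical, not Panoptic calibration data.

```python
# Sketch of Gaussian blending for one virtual-view pixel: contributions from
# overlapping cameras are weighted by a Gaussian of angular distance between
# the viewing ray and each camera's optical axis. Geometry is illustrative.
import numpy as np

def gaussian_blend(direction, cam_axes, cam_samples, sigma=0.3):
    """Blend per-camera pixel samples observed along one viewing direction.

    direction   : unit 3-vector of the virtual observer's ray
    cam_axes    : (N, 3) unit optical axes of the contributing cameras
    cam_samples : (N,) pixel values each camera observed along `direction`
    """
    cos_angles = np.clip(cam_axes @ direction, -1.0, 1.0)
    angles = np.arccos(cos_angles)                 # angular distance per camera
    weights = np.exp(-(angles ** 2) / (2 * sigma ** 2))
    return np.sum(weights * cam_samples) / np.sum(weights)

# Example: three cameras on a hemisphere observing the same direction.
axes = np.array([[0.0, 0.0, 1.0],
                 [0.5, 0.0, np.sqrt(0.75)],
                 [0.0, 0.5, np.sqrt(0.75)]])
samples = np.array([120.0, 131.0, 125.0])          # hypothetical observations
ray = np.array([0.2, 0.1, 0.97]); ray /= np.linalg.norm(ray)
print(gaussian_blend(ray, axes, samples))
```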

    PANOPTIC: An Interconnected Network of Cameras

    Multi-camera systems have attracted attention in recent years due to the rapidly dropping cost of digital cameras. This has enabled a wide variety of new research topics and applications for Multi View Imaging (MVI) systems. Virtual view synthesis, high-performance imaging, image/video segmentation, depth map estimation, 3DTV, and free-viewpoint TV are examples of such applications. High data rates, camera synchronization, and calibration make the real-time deployment of these systems a challenging task. Most multi-camera systems built so far were designed as multi-video recording systems, with their target applications processed offline on computers. In a centralized processing approach, all the computation of a multi-camera system takes place on a single central processing unit (e.g. a PC). This approach does not enable real-time deployment when the number of cameras is high: the processing demand and the connectivity problem from the cameras to a single unit are limiting factors. One solution for real-time deployment is to distribute the target applications and run them in parallel at the camera level and among the cameras. This method assumes intra (at the camera level) and inter (among the cameras) signal processing. In inter signal processing, information (images/data) is exchanged among the cameras, hence a communication medium is required. This thesis introduces a communication medium for the support of inter signal processing in a multi-camera system, based on the interconnected network concept. A methodology is introduced for network topology selection, camera assignment to network nodes, and performance analysis and simulation of the network for any target application.

    Hardware Implementation and Applications of High Performance Multiple Camera Architectures

    Advancements in imaging technology have made commercially available cameras cheaper and more accessible, with higher resolution than ever and complex image-capturing features. Today, it is estimated that more than one billion cameras are sold every year. However, progress in imaging technology is slowly reaching the limits imposed by the nature of the technology: diffraction limits the number of effective pixels in a sensor, process technology does not allow pixels to become smaller, and lens systems grow more complex as sensor technology continues to advance. To enhance the capabilities of traditional imaging systems, researchers are increasing the available computing power by combining imagers with FPGAs and GPUs; the enormous computational power of modern processing systems is extending the imaging capabilities of current cameras. One way is to combine multiple cameras with FPGAs to create new possibilities for image capturing systems.

    This thesis focuses on FPGA-based camera systems and their applications. The goal of the multiple camera systems introduced in this work is to create real-time video with a wider field of view by distributing the tasks among the camera nodes. This is achieved by carefully placing multiple cameras on a hemispherical dome and adding communication features among the cameras. The cameras create a 360-degree view by exploiting the smart features added to the FPGAs. The designed distributed algorithm shares the reconstruction load evenly among the nodes, thus reducing the problems caused by additional cameras in multiple camera systems; a sketch of this load sharing is given after this abstract. The presented Panoptic system achieves higher-performance omnidirectional video than its previous implementations. The thesis also introduces a head-mounted-display-based viewing system to render omnidirectional videos for the human visual system.

    Another important aspect of this thesis is real-time resolution enhancement. The resolution of current imaging systems can be increased after fabrication by enhancement methods applied during post-processing. To this aim, real-time image registration and real-time super-resolution algorithms are designed to be implemented on FPGA-based camera systems. The real-time image registration algorithm calculates the optical flow between images to recover the motion among the observations; it allows images to be registered on a finer grid so that the super-resolution algorithm can enhance the image. The thesis introduces hardware implementations for super-resolution algorithms previously presented in the literature. Many super-resolution methods are complex and computationally expensive, so real-time implementation is a challenging issue. We discuss the super-resolution algorithms and provide hardware implementations for two well-known ones.

    Last but not least, combining the industry-standard MIPI communication scheme with FPGAs is discussed. MIPI-based interfaces are becoming the industry standard for chip-to-chip communication. However, MIPI is not directly compatible with FPGA systems due to physical-layer limitations and high transmission speeds. We aim to provide guidelines for next-generation multiple camera systems consisting of MIPI-based cameras. Finally, the thesis concludes with a discussion of next-generation multiple camera systems, applications, and future directions.
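
    As one concrete illustration of the even load sharing mentioned above, the toy sketch below partitions the columns of the panoramic output frame into equal slices, one per camera node. The equirectangular layout and the simple column split are assumptions made for illustration, not the thesis's actual assignment scheme.

```python
# Toy sketch of distributing reconstruction load: the omnidirectional output
# frame is partitioned so each camera node reconstructs an equal slice.
# The equirectangular layout and even column split are assumptions.
def assign_output_slices(out_width, num_nodes):
    """Return per-node [start, end) column ranges of the panoramic frame."""
    base, extra = divmod(out_width, num_nodes)
    slices, start = [], 0
    for node in range(num_nodes):
        width = base + (1 if node < extra else 0)  # spread the remainder
        slices.append((start, start + width))
        start += width
    return slices

# Example: a 1024-column panorama shared among 20 camera nodes.
for node, (lo, hi) in enumerate(assign_output_slices(1024, 20)):
    print(f"node {node:2d} reconstructs columns [{lo}, {hi})")
```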

    Hardware implementation of real-time multiple frame super-resolution

    Super-resolution reconstruction is a method for reconstructing higher-resolution images from a set of low-resolution observations. The sub-pixel differences among different observations of the same scene make it possible to create higher-resolution images of better quality. In the last thirty years, many methods for creating high-resolution images have been proposed; however, hardware implementations of such methods remain limited. In this work, a highly parallel and pipelined implementation of the iterative back projection super-resolution algorithm is presented. The proposed hardware implementation is capable of reconstructing 512×512 images from a set of 20 lower-resolution observations in real time, at up to 25 frames per second (fps). The system has been synthesized and verified on Xilinx VC707 FPGAs. To the best of our knowledge, it is currently the fastest FPGA-based super-resolution implementation.
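
    The iterative back projection scheme named in the abstract (in the spirit of Irani and Peleg) can be modeled in a few lines of software, as sketched below. Integer shifts at the high-resolution scale, block averaging as the imaging model, and pixel-replication upsampling are simplifying assumptions; the paper's actual contribution is the parallel, pipelined FPGA datapath, which this sketch does not model.

```python
# Simplified software model of iterative back projection super-resolution:
# simulate each low-resolution (LR) observation from the current HR estimate,
# then back-project the residuals. The imaging model here is a crude stand-in.
import numpy as np

def simulate_lr(hr, shift, factor):
    """Imaging model: shift at HR scale, then average each factor x factor block."""
    shifted = np.roll(hr, shift, axis=(0, 1))
    rows = np.arange(0, hr.shape[0], factor)
    cols = np.arange(0, hr.shape[1], factor)
    summed = np.add.reduceat(np.add.reduceat(shifted, rows, axis=0), cols, axis=1)
    return summed / (factor * factor)

def iterative_back_projection(lr_frames, shifts, factor, iters=10, step=1.0):
    """Reconstruct an HR image from shifted LR observations."""
    hr = np.kron(lr_frames[0], np.ones((factor, factor)))   # initial guess
    for _ in range(iters):
        correction = np.zeros_like(hr)
        for lr, s in zip(lr_frames, shifts):
            err = lr - simulate_lr(hr, s, factor)           # LR-domain residual
            up = np.kron(err, np.ones((factor, factor)))    # back-project to HR
            correction += np.roll(up, (-s[0], -s[1]), axis=(0, 1))
        hr += step * correction / len(lr_frames)
    return hr

# Example: 20 randomly shifted LR observations of a synthetic 512x512 scene.
rng = np.random.default_rng(1)
truth = rng.uniform(0, 1, size=(512, 512))
shifts = [tuple(rng.integers(0, 4, size=2)) for _ in range(20)]
lrs = [simulate_lr(truth, s, 4) for s in shifts]
estimate = iterative_back_projection(lrs, shifts, factor=4)
```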

    Block Matching Based Real-Time Optical Flow Hardware Implementation

    Optical flow calculation algorithms are hard to implement in real time at the hardware level due to their complexity and high computational load. In this work, we present a novel hierarchical block matching based optical flow algorithm. The algorithm estimates the initial optical flow with block matching and refines the vectors with local smoothness constraints at each hierarchy level. We evaluate the proposed algorithm on novel data sets and compare the results against ground-truth optical flow. Furthermore, we present a hardware architecture that computes the optical flow in real time: the presented design can process 640×480 video at 26 frames per second (fps).
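
    A minimal sketch of the block matching step at the core of such an algorithm is given below: for each block of the first frame, a window of the second frame is searched for the displacement that minimizes the sum of absolute differences (SAD). The hierarchy and local smoothness refinement from the paper are omitted, and the block and search sizes are illustrative.

```python
# Exhaustive block matching over a small search window, the basic building
# block of block matching optical flow. Sizes here are illustrative.
import numpy as np

def block_matching_flow(f0, f1, block=8, search=4):
    """Return integer (dy, dx) flow per block between two grayscale frames."""
    h, w = f0.shape
    flow = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = f0[by:by + block, bx:bx + block].astype(np.int32)
            best, best_dv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue                    # candidate leaves the frame
                    cand = f1[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(ref - cand).sum()  # matching cost
                    if sad < best:
                        best, best_dv = sad, (dy, dx)
            flow[by // block, bx // block] = best_dv
    return flow
```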

    REAL-TIME HARDWARE IMPLEMENTATION OF MULTI-RESOLUTION IMAGE BLENDING

    A novel real-time implementation of a multi-resolution image blending algorithm is presented in this paper. A multi-resolution decomposition of the input is used to blend multiple images at different scales, and processing time is shortened by a pipelined system design. The proposed solution requires fewer hardware multipliers than current designs and achieves very high operating frequencies. The presented hardware architecture is optimized to support multiple simultaneous video streams and high frame rates at High-Definition (HD) resolution.
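
    The classic multi-resolution formulation of such a blend is the Laplacian pyramid of Burt and Adelson, sketched below in software: each image is decomposed into band-pass levels, the levels are blended with a correspondingly downsampled mask, and the result is collapsed back to full resolution. Power-of-two image sizes are assumed so the pyramids align; the paper's pipelined fixed-point datapath is not modeled.

```python
# Laplacian-pyramid image blending, the classic multi-resolution approach.
# Assumes grayscale float32 inputs with power-of-two dimensions.
import cv2
import numpy as np

def blend_multiresolution(a, b, mask, levels=4):
    """Blend images `a` and `b` with a soft mask, scale by scale."""
    la, lb, gm = [], [], [mask]
    ga, gb = a, b
    for _ in range(levels):
        da, db = cv2.pyrDown(ga), cv2.pyrDown(gb)
        la.append(ga - cv2.pyrUp(da))        # band-pass (Laplacian) level
        lb.append(gb - cv2.pyrUp(db))
        ga, gb = da, db
        gm.append(cv2.pyrDown(gm[-1]))       # mask follows the same pyramid
    out = gm[levels] * ga + (1 - gm[levels]) * gb    # blend the coarse base
    for lvl in range(levels - 1, -1, -1):            # collapse the pyramid
        out = cv2.pyrUp(out) + gm[lvl] * la[lvl] + (1 - gm[lvl]) * lb[lvl]
    return out

# Example: blend two synthetic 256x256 frames across a soft vertical seam.
h = w = 256
a = np.full((h, w), 0.9, np.float32)
b = np.full((h, w), 0.2, np.float32)
mask = np.tile(np.linspace(1, 0, w, dtype=np.float32), (h, 1))
panorama = blend_multiresolution(a, b, mask)
```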